53 research outputs found

    A Distributed Block-Split Gibbs Sampler with Hypergraph Structure for High-Dimensional Inverse Problems

    Sampling-based algorithms are classical approaches to perform Bayesian inference in inverse problems. They provide estimators along with the associated credibility intervals to quantify the uncertainty on these estimators. Although these methods hardly scale to high-dimensional problems, they have recently been paired with optimization techniques, such as proximal and splitting approaches, to address this issue. Such approaches pave the way to distributed samplers, splitting computations to make inference more scalable and faster. We introduce a distributed Split Gibbs sampler (SGS) to efficiently solve such problems involving distributions with multiple smooth and non-smooth functions composed with linear operators. The proposed approach leverages a recent approximate augmentation technique reminiscent of primal-dual optimization methods. It is further combined with a block-coordinate approach that splits the primal and dual variables into blocks, leading to a distributed block-coordinate SGS. The resulting algorithm exploits the hypergraph structure of the involved linear operators to efficiently distribute the variables over multiple workers under controlled communication costs. It accommodates several distributed architectures, such as Single Program Multiple Data and client-server architectures. Experiments on a large image deblurring problem demonstrate the ability of the proposed approach to produce high-quality estimates with credibility intervals in a small amount of time. Codes to reproduce the experiments are available online.
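
    The splitting idea behind the SGS can be illustrated with a minimal single-machine sketch. This is a hypothetical toy, not the paper's distributed algorithm: both potentials are taken quadratic so that each conditional is Gaussian, whereas the paper handles non-smooth terms composed with linear operators and distributes the blocks over workers. An auxiliary variable z is coupled to x through a Gaussian of width rho, and the sampler alternates between the two conditionals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy augmented target: p(x, z) ∝ exp(-f(x) - g(z) - (x - z)^2 / (2 rho^2))
# with f(x) = (y - x)^2 / (2 sigma^2)  (data fidelity)
# and  g(z) = z^2 / (2 tau^2)          (prior; quadratic here on purpose,
#                                       so both conditionals are Gaussian)
y, sigma, tau, rho = 2.0, 1.0, 1.0, 0.5
n_iter, burn_in = 20000, 2000

x, z = 0.0, 0.0
samples = np.empty(n_iter)
for t in range(n_iter):
    # x | z is Gaussian with precision 1/sigma^2 + 1/rho^2
    prec_x = 1 / sigma**2 + 1 / rho**2
    mean_x = (y / sigma**2 + z / rho**2) / prec_x
    x = mean_x + rng.standard_normal() / np.sqrt(prec_x)
    # z | x is Gaussian with precision 1/tau^2 + 1/rho^2
    prec_z = 1 / tau**2 + 1 / rho**2
    mean_z = (x / rho**2) / prec_z
    z = mean_z + rng.standard_normal() / np.sqrt(prec_z)
    samples[t] = x

# marginalizing z gives an effective prior N(0, tau^2 + rho^2), so the
# exact posterior mean of x is y / (1 + sigma^2 / (tau^2 + rho^2))
post_mean = samples[burn_in:].mean()
```

    The interest of the augmentation is that x never sees g directly: in the actual algorithm each conditional only involves one term of the objective, which is what allows the blocks to be sampled in parallel by different workers.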

    Modeling spatial and temporal variabilities in hyperspectral image unmixing

    Acquired in hundreds of contiguous spectral bands, hyperspectral (HS) images have received increasing interest due to the significant spectral information they convey about the materials present in a given scene. However, the limited spatial resolution of hyperspectral sensors implies that the observations are mixtures of multiple signatures corresponding to distinct materials. Hyperspectral unmixing is aimed at identifying the reference spectral signatures composing the data -- referred to as endmembers -- and their relative proportion in each pixel according to a predefined mixture model. In this context, a given material is commonly assumed to be represented by a single spectral signature. This assumption reveals a first limitation, since endmembers may vary locally within a single image, or from one image to another due to varying acquisition conditions, such as declivity and possibly complex interactions between the incident light and the observed materials. Unless properly accounted for, spectral variability can have a significant impact on the shape and the amplitude of the acquired signatures, thus inducing possibly significant estimation errors during the unmixing process. A second limitation results from the significant size of HS data, which may preclude the use of batch estimation procedures commonly used in the literature, i.e., techniques exploiting all the available data at once. Such computational considerations notably become prominent to characterize endmember variability in multi-temporal HS (MTHS) images, i.e., sequences of HS images acquired over the same area at different time instants. The main objective of this thesis consists in introducing new models and unmixing procedures to account for spatial and temporal endmember variability. Endmember variability is addressed by considering an explicit variability model reminiscent of the total least squares problem, and later extended to account for time-varying signatures.
The variability is first estimated using an unsupervised deterministic optimization procedure based on the Alternating Direction Method of Multipliers (ADMM). Given the sensitivity of this approach to abrupt spectral variations, a robust model formulated within a Bayesian framework is introduced. This formulation enables smooth spectral variations to be described in terms of spectral variability, and abrupt changes in terms of outliers. Finally, the computational restrictions induced by the size of the data are tackled with an online estimation algorithm. This work further investigates an asynchronous distributed estimation procedure to estimate the parameters of the proposed models.

    Estimation de variabilité pour le démélange non-supervisé d'images hyperspectrales

    Hyperspectral image unmixing aims at identifying the spectral signatures of an imaged scene as well as their proportions in each pixel. In practice, however, the extracted signatures exhibit a variability that can compromise the reliability of this identification. Assuming these signatures are potentially affected by the variability phenomenon, we propose to estimate the parameters of a linear mixing model using a Proximal Alternating Linearized Minimization (PALM) algorithm, whose convergence has been established for a class of non-convex problems that precisely includes the hyperspectral unmixing problem. The proposed method is evaluated on synthetic and real data.
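
    The PALM scheme can be sketched on a stripped-down unmixing problem. This is an illustration under simple nonnegativity constraints only (the paper's model additionally carries variability terms): PALM alternates proximal-gradient steps on the endmember matrix M and the abundance matrix A, with step sizes set from the block Lipschitz constants, and the prox of the nonnegativity indicator reduces to clipping at zero.

```python
import numpy as np

rng = np.random.default_rng(0)
L_bands, R, N = 20, 3, 50                    # toy sizes: bands, endmembers, pixels
M_true = rng.uniform(0.1, 1.0, (L_bands, R))
A_true = rng.dirichlet(np.ones(R), size=N).T
Y = M_true @ A_true                          # noiseless toy mixtures

# PALM for 0.5 * ||Y - M A||_F^2 s.t. M >= 0, A >= 0
M = rng.uniform(0.1, 1.0, (L_bands, R))
A = rng.dirichlet(np.ones(R), size=N).T
gamma = 1.01                                 # step-size safety factor (> 1)
losses = []
for _ in range(300):
    # M-block: gradient (M A - Y) A^T has Lipschitz constant ||A A^T||_2
    c_M = gamma * max(np.linalg.norm(A @ A.T, 2), 1e-8)
    M = np.maximum(M - ((M @ A - Y) @ A.T) / c_M, 0.0)
    # A-block: gradient M^T (M A - Y) has Lipschitz constant ||M^T M||_2
    c_A = gamma * max(np.linalg.norm(M.T @ M, 2), 1e-8)
    A = np.maximum(A - (M.T @ (M @ A - Y)) / c_A, 0.0)
    losses.append(0.5 * np.sum((Y - M @ A) ** 2))
```

    The objective decreases monotonically along the iterations, which is exactly the guarantee PALM provides for this non-convex class of problems.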

    Hyperspectral unmixing with spectral variability using a perturbed linear mixing model

    International audienceGiven a mixed hyperspectral data set, linear unmixing aims at estimating the reference spectral signatures composing the data-referred to as endmembers-their abundance fractions and their number. In practice, the identified endmembers can vary spectrally within a given image and can thus be construed as variable instances of reference endmembers. Ignoring this variability induces estimation errors that are propagated into the unmixing procedure. To address this issue, endmember variability estimation consists of estimating the reference spectral signatures from which the estimated endmembers have been derived as well as their variability with respect to these references. This paper introduces a new linear mixing model that explicitly accounts for spatial and spectral endmember variabilities. The parameters of this model can be estimated using an optimization algorithm based on the alternating direction method of multipliers. The performance of the proposed unmixing method is evaluated on synthetic and real data. A comparison with state-of-the-art algorithms designed to model and estimate endmember variability allows the interest of the proposed unmixing solution to be appreciated

    Efficient sampling of non log-concave posterior distributions with mixture of noises

    This paper focuses on a challenging class of inverse problems that is often encountered in applications. The forward model is a complex non-linear black-box, potentially non-injective, whose outputs cover multiple decades in amplitude. Observations are supposed to be simultaneously damaged by additive and multiplicative noises and censorship. As needed in many applications, the aim of this work is to provide uncertainty quantification on top of parameter estimates. The resulting log-likelihood is intractable and potentially non-log-concave; whenever smooth, its gradient admits a Lipschitz constant too large to be exploited in the inference process. An adapted Bayesian approach is proposed to provide credibility intervals along with point estimates. An MCMC algorithm is proposed to deal with the multimodal posterior distribution, even in situations where there is no global Lipschitz constant (or it is very large). It combines two kernels, namely an improved version of the Preconditioned Metropolis-Adjusted Langevin algorithm (PMALA) and a Multiple Try Metropolis (MTM) kernel. This sampler addresses all the challenges induced by the complex form of the likelihood. The proposed method is illustrated on classical multimodal test distributions as well as on a challenging and realistic inverse problem in astronomy.
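
    The MTM ingredient can be illustrated on a toy bimodal target. This sketch assumes a symmetric Gaussian proposal with candidate weights equal to the target density (one standard MTM variant); the paper's sampler additionally alternates with a preconditioned Langevin kernel, which is not reproduced here. Multiple tries let the chain jump between well-separated modes far more often than a single-proposal Metropolis step.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_pi(x):
    # bimodal target: equal-weight mixture of N(-4, 1) and N(4, 1)
    return np.logaddexp(-0.5 * (x + 4) ** 2, -0.5 * (x - 4) ** 2)

def mtm_step(x, scale=4.0, k=10):
    # draw k candidates from a symmetric Gaussian proposal
    ys = x + scale * rng.standard_normal(k)
    w = np.exp(log_pi(ys))
    if w.sum() == 0.0:
        return x
    y = rng.choice(ys, p=w / w.sum())            # select one candidate
    # reference points for the reverse move (plus the current state)
    xs = y + scale * rng.standard_normal(k - 1)
    denom = np.exp(log_pi(xs)).sum() + np.exp(log_pi(x))
    if rng.random() < min(1.0, w.sum() / denom):
        return y
    return x

x, chain = 0.0, np.empty(20000)
for t in range(20000):
    x = mtm_step(x)
    chain[t] = x

frac_pos = (chain > 0).mean()  # near 0.5 if both modes are visited
```

    The acceptance ratio compares the total candidate weight at the forward move with that of the reference set, which keeps the kernel reversible with respect to the target.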

    Time-Regularized Blind Deconvolution Approach for Radio Interferometry


    A perturbed linear mixing model accounting for spectral variability

    Hyperspectral unmixing aims at determining the reference spectral signatures composing a hyperspectral image, their abundance fractions and their number. In practice, the spectral variability of the identified signatures induces significant abundance estimation errors. To address this issue, this paper introduces a new linear mixing model explicitly accounting for this phenomenon. In this setting, the extracted endmembers are interpreted as possibly corrupted versions of the true endmembers. The parameters of this model can be estimated using an optimization algorithm based on the alternating direction method of multipliers. The performance of the proposed unmixing method is evaluated on synthetic and real data.
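
    The perturbed linear mixing model can be written down in a few lines. This is a hedged sketch of the forward model only, with hypothetical toy sizes and noise levels: each pixel mixes pixel-wise perturbed endmembers, and the paper's ADMM algorithm (not reproduced here) jointly estimates the perturbations, whereas plain least squares simply treats them as noise.

```python
import numpy as np

rng = np.random.default_rng(1)
L, R, N = 30, 3, 100                      # bands, endmembers, pixels (toy sizes)
M = rng.uniform(0.1, 1.0, (L, R))         # reference endmember signatures

# abundances on the nonnegative simplex (sum to one per pixel)
A = rng.dirichlet(np.ones(R), size=N).T   # shape (R, N)

sigma_dm, sigma_e = 0.02, 0.005
Y = np.empty((L, N))
for n in range(N):
    dM_n = sigma_dm * rng.standard_normal((L, R))   # pixel-wise perturbation
    Y[:, n] = (M + dM_n) @ A[:, n] + sigma_e * rng.standard_normal(L)

# ignoring variability: per-pixel least squares with the reference M,
# which lumps the perturbation term dM_n @ a_n into the noise
A_ls, *_ = np.linalg.lstsq(M, Y, rcond=None)
err = np.abs(A_ls - A).mean()
```

    Comparing `err` across perturbation levels `sigma_dm` shows how unmodeled variability degrades the abundance estimates, which is precisely what the explicit variability term is meant to absorb.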

    Parallel faceted imaging in radio interferometry via proximal splitting (Faceted HyperSARA): when precision meets scalability

    Upcoming radio interferometers aim to image the sky at new levels of resolution and sensitivity, with wide-band image cubes reaching close to the Petabyte scale for SKA. Modern proximal optimization algorithms have shown a potential to significantly outperform CLEAN thanks to their ability to inject complex image models to regularize the inverse problem for image formation from visibility data. They were also shown to be scalable to large data volumes thanks to a splitting functionality enabling the decomposition of data into blocks, for parallel processing of block-specific data-fidelity terms of the objective function. In this work, the splitting functionality is further exploited to decompose the image cube into spatio-spectral facets, and enable parallel processing of facet-specific regularization terms in the objective. The resulting Faceted HyperSARA algorithm is implemented in MATLAB (code available on GitHub). Simulation results on synthetic image cubes confirm that faceting can provide a major increase in scalability at no cost in imaging quality. A proof-of-concept reconstruction of a 15 GB image of Cyg A from 7.4 GB of VLA data, utilizing 496 CPU cores on a HPC system for 68 hours, confirms both scalability and a quantum jump in imaging quality over CLEAN. Assuming a slow spectral slope of Cyg A, we also demonstrate that Faceted HyperSARA can be combined with a dimensionality reduction technique, enabling the use of only 31 CPU cores for 142 hours to form the Cyg A image from the same data, while preserving reconstruction quality. Cyg A reconstructed cubes are available online.
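
    The spatio-spectral faceting can be sketched as a tiling of the cube with overlapping slices. This is a minimal illustration with hypothetical facet counts and overlap sizes, not the paper's implementation: each facet would be handled by a different worker applying its facet-specific regularization, and the overlap regions are where neighboring facets share pixels.

```python
import numpy as np

def facet_slices(n, q, overlap):
    """Split an axis of length n into q facets, extending each facet by
    `overlap` pixels on both sides (clipped at the borders)."""
    edges = np.linspace(0, n, q + 1).astype(int)
    return [slice(max(edges[i] - overlap, 0), min(edges[i + 1] + overlap, n))
            for i in range(q)]

cube = np.zeros((4, 12, 12))                  # toy (channels, y, x) image cube
cover = np.zeros_like(cube)
for sc in facet_slices(4, 2, 0):              # spectral facets, no overlap
    for sy in facet_slices(12, 3, 2):         # spatial facets, 2-pixel overlap
        for sx in facet_slices(12, 3, 2):
            cover[sc, sy, sx] += 1            # one worker per (sc, sy, sx) facet

# every voxel is covered at least once; overlap regions more than once
```

    The overlap is what allows facet-local regularizers (e.g., faceted wavelet dictionaries) to remain consistent across facet borders while the facets are processed in parallel.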

    Neural network-based emulation of interstellar medium models

    The interpretation of observations of atomic and molecular tracers in the galactic and extragalactic interstellar medium (ISM) requires comparisons with state-of-the-art astrophysical models to infer some physical conditions. Usually, ISM models are too time-consuming for such inference procedures, as they call for numerous model evaluations. As a result, they are often replaced by an interpolation of a grid of precomputed models. We propose a new general method to derive faster, lighter, and more accurate approximations of the model from a grid of precomputed models. These emulators are defined with artificial neural networks (ANNs) designed and trained to address the specificities inherent in ISM models. Indeed, such models often predict many observables (e.g., line intensities) from just a few input physical parameters and can yield outliers due to numerical instabilities or physical bistabilities. We propose applying five strategies to address these characteristics: 1) an outlier removal procedure; 2) a clustering method that yields homogeneous subsets of lines that are simpler to predict with different ANNs; 3) a dimension reduction technique that enables the network architecture to be adequately sized; 4) a polynomial augmentation of the physical inputs to ease the learning of nonlinearities; and 5) a dense architecture to ease the learning of simple relations. We compare the proposed ANNs with standard classes of interpolation methods to emulate the Meudon PDR code, a representative ISM numerical model. Combinations of the proposed strategies outperform all interpolation methods by a factor of 2 on the average error, reaching 4.5% on the Meudon PDR code. These networks are also 1000 times faster than accurate interpolation methods and require ten to forty times less memory. This work will enable efficient inferences on wide-field multiline observations of the ISM.
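
    Strategy 4, the polynomial augmentation of the inputs, is simple enough to sketch directly. This is a hypothetical minimal version: each physical parameter is complemented with its powers before being fed to the network, so the dense layers only need to learn combinations of precomputed nonlinear features rather than the nonlinearities themselves.

```python
import numpy as np

def poly_augment(X, degree=3):
    """Augment each input feature with its powers up to `degree`.
    X has shape (n_samples, n_features); the output has
    shape (n_samples, n_features * degree)."""
    return np.concatenate([X**d for d in range(1, degree + 1)], axis=1)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (5, 4))   # 5 samples, 4 physical parameters (toy sizes)
Z = poly_augment(X)              # 12 augmented features per sample
```

    The same augmented matrix `Z` would then be standardized and passed to the dense ANN; in practice the inputs are also log-scaled first, since ISM parameters such as densities span several decades.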
